36-705: Intermediate Statistics, Fall 2017. Lecture 28: November 8. Lecturer: Siva
Author
Abstract
The other way to avoid the curse of dimensionality is to assume sparsity: even though we have many covariates, the true regression function depends (strongly) only on a small number of relevant covariates. This is the setting we will focus on. More broadly, the main idea is to identify practically relevant structural properties (such as smoothness or sparsity) that we can exploit to get around the discouraging worst-case rates of convergence.
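To make the sparsity idea concrete, here is a minimal numerical sketch (not from the lecture): a regression problem with many covariates but only a few relevant ones, fit by proximal gradient descent (ISTA) for the lasso. All names and parameter choices (`n`, `d`, `k`, `lam`) are illustrative assumptions.

```python
import numpy as np

# Toy sparse regression: d = 50 covariates, but only k = 3 matter.
rng = np.random.default_rng(0)
n, d, k = 100, 50, 3
beta_true = np.zeros(d)
beta_true[:k] = [3.0, -2.0, 1.5]
X = rng.normal(size=(n, d))
y = X @ beta_true + 0.1 * rng.normal(size=n)

def ista(X, y, lam, steps=2000):
    """Minimize (1/2n)||y - Xb||^2 + lam * ||b||_1 by proximal gradient."""
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    b = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (X @ b - y) / n
        z = b - grad / L
        # Soft-thresholding step: this is what produces a sparse estimate.
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return b

beta_hat = ista(X, y, lam=0.1)
support = np.flatnonzero(np.abs(beta_hat) > 0.05)
print(support)  # should be concentrated on the first k coordinates
```

The point of the sketch is only that exploiting sparsity lets the estimator behave as if the dimension were k rather than d, which is how such structure circumvents the worst-case rates.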
Similar resources
36-705: Intermediate Statistics, Fall 2017
It is worth keeping the trade-off in mind: Bayes estimators, although easy to compute, are somewhat subjective (in that they depend strongly on the prior π). Minimax estimators, although more challenging to compute, are not subjective, but they have the drawback of protecting against the worst case, which can lead to pessimistic conclusions, i.e. the minimax risk might be much higher than...
Full text
10-705: Intermediate Statistics, Fall 2012. Homework 4 Solutions
Problem 1 [C&B 5.39 b]. Using the normal approximation, we have µ_V = r(1 − p)/p = (20)(.3)/.7 = 8.57 and σ_V = √(r(1 − p)/p²) = √((20)(.3)/.49) = 3.5.
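The arithmetic above can be checked directly: with r = 20 and p = .7, the negative binomial (counting failures) has mean r(1 − p)/p and standard deviation √(r(1 − p)/p²).

```python
import math

# Check of the quoted figures: r = 20 successes, success probability p = 0.7.
r, p = 20, 0.7
mu = r * (1 - p) / p                      # mean number of failures
sigma = math.sqrt(r * (1 - p) / p ** 2)   # standard deviation
print(round(mu, 2), round(sigma, 2))      # prints 8.57 3.5
```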
Full text
Randomness & Computation, Fall 2011. Lecture 20: November 1. Lecturer: Alistair Sinclair. Based on scribe notes by:
In this lecture we use Azuma's inequality to analyze the randomized Quicksort algorithm. Quicksort takes as input a set S of numbers, which can be assumed distinct without loss of generality, and sorts S as follows: it picks a pivot x ∈ S uniformly at random, partitions S into S_{<x} = {y ∈ S | y < x} and S_{>x} = {y ∈ S | y > x}, and recursively sorts S_{<x} and S_{>x}. The fo...
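The recursion just described can be sketched in a few lines (a minimal illustration, assuming distinct elements as the abstract does):

```python
import random

# Randomized Quicksort: pick a uniformly random pivot x, split S into the
# elements below and above x, and recurse on both halves.
def quicksort(S):
    if len(S) <= 1:
        return list(S)
    x = random.choice(S)              # uniformly random pivot
    S_lt = [y for y in S if y < x]    # S_{<x}
    S_gt = [y for y in S if y > x]    # S_{>x}
    return quicksort(S_lt) + [x] + quicksort(S_gt)

print(quicksort([5, 2, 9, 1, 7]))  # prints [1, 2, 5, 7, 9]
```

The quantity the martingale analysis then tracks is the number of comparisons, which is random because the pivot choices are.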
Full text